There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, the largest of which trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model - GatorTron - using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks including clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve 5 clinical NLP tasks (e.g., 9.6% and 9.5% improvement in accuracy for NLI and MQA), which can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.
Artificial intelligence (AI) has been transforming industry and academic research across the globe, and research software development is no exception. Machine learning and deep learning are being applied in every aspect of the research software development lifecycle, from new algorithm design paradigms to software development processes. In this paper, we discuss our perspective on the challenges and opportunities that AI presents for research software development and engineering today, and we demonstrate our approach at the University of Florida to preparing our workforce for the new era of AI.
As Artificial and Robotic Systems are increasingly deployed and relied upon for real-world applications, it is important that they exhibit the ability to continually learn and adapt in dynamically-changing environments, becoming Lifelong Learning Machines. Continual/lifelong learning (LL) involves minimizing catastrophic forgetting of old tasks while maximizing a model's capability to learn new tasks. This paper addresses the challenging lifelong reinforcement learning (L2RL) setting. Pushing the state-of-the-art forward in L2RL and making L2RL useful for practical applications requires more than developing individual L2RL algorithms; it requires making progress at the systems-level, especially research into the non-trivial problem of how to integrate multiple L2RL algorithms into a common framework. In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system. As an instantiation of L2RLCF, we develop a standard API allowing easy integration of novel lifelong learning components. We describe a case study that demonstrates how multiple independently-developed LL components can be integrated into a single realized system. We also introduce an evaluation environment in order to measure the effect of combining various system components. Our evaluation environment employs different LL scenarios (sequences of tasks) consisting of Starcraft-2 minigames and allows for the fair, comprehensive, and quantitative comparison of different combinations of components within a challenging common evaluation environment.
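The abstract above mentions a standard API through which independently developed lifelong-learning components plug into a common system. The actual L2RLCF interface is not given here, so the following is a minimal sketch of what such a component API could look like; all names (`LifelongComponent`, `on_task_start`, `on_transition`, `EpisodicReplay`, and so on) are illustrative assumptions, not the framework's real interface.

```python
from abc import ABC, abstractmethod

class LifelongComponent(ABC):
    """Hypothetical plug-in interface: each component reacts to shared
    lifecycle events (new task starts, environment transitions)."""

    @abstractmethod
    def on_task_start(self, task_id: str) -> None: ...

    @abstractmethod
    def on_transition(self, transition: dict) -> None: ...


class EpisodicReplay(LifelongComponent):
    """Example component: stores past transitions so they can later be
    replayed to mitigate catastrophic forgetting."""

    def __init__(self, capacity: int = 10_000):
        self.capacity = capacity
        self.memory = []          # list of (task_id, transition) pairs
        self.current_task = None

    def on_task_start(self, task_id: str) -> None:
        self.current_task = task_id

    def on_transition(self, transition: dict) -> None:
        if len(self.memory) >= self.capacity:
            self.memory.pop(0)    # evict the oldest experience
        self.memory.append((self.current_task, transition))


class LifelongSystem:
    """Assimilates multiple components behind one event dispatcher."""

    def __init__(self):
        self.components = []

    def register(self, component: LifelongComponent) -> None:
        self.components.append(component)

    def start_task(self, task_id: str) -> None:
        for c in self.components:
            c.on_task_start(task_id)

    def step(self, transition: dict) -> None:
        for c in self.components:
            c.on_transition(transition)
```

Under this kind of design, a new continual-learning idea only has to implement the two callbacks to be combined with existing components in a single realized system.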
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that can aid medical professionals by diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty in acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results using two lung ultrasound datasets and demonstrate that our model is capable of achieving performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
In recent years, deep learning has infiltrated every field it has touched, reducing the need for specialist knowledge and automating the process of knowledge discovery from data. This review argues that astronomy is no different, and that we are currently in the midst of a deep learning revolution that is transforming the way we do astronomy. We trace the history of astronomical connectionism from the early days of multilayer perceptrons, through the second wave of convolutional and recurrent neural networks, to the current third wave of self-supervised and unsupervised deep learning. We then predict that we will soon enter a fourth wave of astronomical connectionism, in which finetuned versions of an all-encompassing 'foundation' model will replace expertly crafted deep learning models. We argue that such a model can only be brought about through a symbiotic relationship between astronomy and connectionism, whereby astronomy provides high quality multimodal data to train the foundation model, and in turn the foundation model is used to advance astronomical research.
Equatorial plasma bubbles (EPBs) are plumes of low-density plasma that rise up from the bottomside of the F layer toward the exosphere. EPBs are a known cause of radio-wave scintillation, which can degrade communications with spacecraft. We build a random forest regressor to predict and forecast the likelihood [0-1] of an EPB detected by the IBI processor on board the Swarm spacecraft. We use eight years of Swarm data, from 2014 to 2021, and transform the data from a time series into a 5-dimensional space comprising latitude, longitude, MLT, year, and day of year. We also add the Kp index, F10.7 cm solar radio flux, and solar wind speed. The observed relationships between EPBs and geolocation, local time, season, and solar activity mostly agree with existing work, while the links to geomagnetic activity remain unclear. The prediction achieves an accuracy of 88% and performs well across EPB-specific spatiotemporal scales. This demonstrates that the XGBoost method is able to successfully capture the climatological and daily variability of Swarm EPBs. Capturing the daily variability has long eluded researchers because of localized and stochastic features within the ionosphere. We leverage Shapley values to explain the model and to gain insight into the physics of EPBs. We find that as solar wind speed increases, the probability of an EPB decreases. We also identify localized spikes in EPB probability. Both insights derive directly from the XGBoost and Shapley techniques.
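The abstract above leans on Shapley values to attribute the model's EPB probability to individual features. As a toy illustration of that attribution, the sketch below computes exact Shapley values for a hand-written value function standing in for the trained XGBoost regressor; the feature names and numbers are illustrative assumptions, not the study's actual model.

```python
from itertools import permutations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: each feature's marginal contribution to the
    value function, averaged over all orderings of the features."""
    phi = {f: 0.0 for f in features}
    for order in permutations(features):
        coalition = set()
        for f in order:
            before = value(frozenset(coalition))
            coalition.add(f)
            phi[f] += value(frozenset(coalition)) - before
    n_orderings = factorial(len(features))
    return {f: v / n_orderings for f, v in phi.items()}

def epb_value(coalition):
    """Hypothetical stand-in model: baseline EPB probability 0.5, lowered
    when solar wind speed is known (mirroring the finding above), and
    nudged up when MLT is known."""
    p = 0.5
    if "solar_wind_speed" in coalition:
        p -= 0.2
    if "mlt" in coalition:
        p += 0.1
    return p

# Attribute the prediction across three of the study's input features.
phi = shapley_values(["solar_wind_speed", "mlt", "kp"], epb_value)
```

By construction, the attributions sum to the difference between the full model output and the empty-coalition baseline, which is what makes Shapley values a principled way to read physics out of a black-box regressor. (Practical tools such as SHAP approximate this for tree ensembles rather than enumerating every ordering.)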
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the \textit{VAscular Lesions DetectiOn and Segmentation} (\textit{Where is VALDO?}) challenge, which was run as a satellite event at the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Data from multiple centers were used for both training and evaluation. Results showed considerable variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - Microbleeds, but no practically useful results yet for Task 3 - Lacunes. The challenge also highlighted performance inconsistencies that may preclude use at the individual level, while still proving useful at the population level.
With the long-term goal of reverse engineering the computational brain from the bottom up, this document focuses on the macrocolumn abstraction layer. The basic macrocolumn architecture is first developed by describing its operation with a state machine model. The state machine functions are then implemented with spiking neurons that support temporal computation. The neuron model is based on active spiking dendrites and reflects the Hawkins/Numenta neuron model. The architecture is demonstrated with a research benchmark in which an agent uses the macrocolumn first to learn, and then to navigate, 2-D environments containing randomly placed features. Environments are represented in the macrocolumn as labeled directed graphs, where edges connect features and labels indicate the relative displacements between them.
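The labeled-directed-graph representation described above can be made concrete with a small sketch: nodes are environment features, and each edge carries a relative (dx, dy) displacement label. The class and method names below are illustrative assumptions, not the document's actual spiking-neuron implementation, which realizes this structure with temporal computation rather than Python dictionaries.

```python
class FeatureGraph:
    """Toy labeled directed graph: edges connect features, and each edge
    label is the relative displacement between them."""

    def __init__(self):
        # feature -> list of (displacement label, neighboring feature)
        self.edges = {}

    def learn(self, feature_a, displacement, feature_b):
        """Record that feature_b lies `displacement` away from feature_a."""
        self.edges.setdefault(feature_a, []).append((displacement, feature_b))

    def navigate(self, start, moves):
        """Follow learned displacement labels from `start`, returning the
        feature reached, or None if some move was never learned."""
        current = start
        for move in moves:
            learned = dict(self.edges.get(current, []))
            if move not in learned:
                return None
            current = learned[move]
        return current

# An agent exploring a 2-D environment might accumulate edges like these:
graph = FeatureGraph()
graph.learn("wall", (1, 0), "door")
graph.learn("door", (0, 1), "food")
```

After the learning phase, navigation reduces to chaining displacement labels through the graph, which is the behavior the benchmark exercises.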
We show that denoising diffusion probabilistic models (DDPMs), a class of score-based generative models, can be used to produce realistic yet fake images that mimic observations of galaxies. Our method is tested with Dark Energy Spectroscopic Instrument (DESI) grz imaging of galaxies from the Photometry and Rotation curve OBservations from Extragalactic Surveys (PROBES) sample and of galaxies selected from the Sloan Digital Sky Survey. Subjectively, the generated galaxies are highly realistic when compared with samples from the real datasets. We quantify the similarity by borrowing from the deep generative learning literature, using the 'Fréchet Inception Distance' to test for subjective and morphological similarity. We also introduce the 'Synthetic Galaxy Distance' metric to compare the emergent physical properties (such as total magnitude, colour, and half-light radius) of a ground-truth parent dataset and a synthesized child dataset. We argue that the DDPM approach produces sharper and more realistic images than other generative methods such as adversarial networks (with the downside of more costly inference), and can be used to produce large samples of synthetic observations tailored to a specific imaging survey. We demonstrate two potential uses of the DDPM: (1) accurate in-painting of occluded data, such as satellite trails, and (2) domain transfer, where new input images can be processed to mimic the properties of the DDPM training set. Here we 'DESI-fy' cartoon images as a proof of concept for domain transfer. Finally, we suggest potential applications of score-based approaches that could motivate further research on this topic within the astronomical community.
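The Fréchet Inception Distance used above is, at its core, the Fréchet distance between two Gaussians fitted to feature statistics of the real and generated image sets. The sketch below reduces this to one dimension so the closed form stays simple; the real FID applies the matrix analogue (with a trace and matrix square root) to Inception-v3 activations, which this toy version does not attempt.

```python
from math import sqrt

def frechet_distance_1d(xs, ys):
    """Squared 1-D Fréchet distance between Gaussians fitted to two samples:
    (m1 - m2)^2 + v1 + v2 - 2*sqrt(v1*v2), where m and v are the sample
    mean and (population) variance of each set."""
    m1, m2 = sum(xs) / len(xs), sum(ys) / len(ys)
    v1 = sum((x - m1) ** 2 for x in xs) / len(xs)
    v2 = sum((y - m2) ** 2 for y in ys) / len(ys)
    return (m1 - m2) ** 2 + v1 + v2 - 2 * sqrt(v1 * v2)
```

Identically distributed real and synthetic samples give a distance of zero, and the score grows as the fitted means and variances diverge, which is what makes the metric useful for ranking generative models.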
While current vision algorithms excel at many challenging tasks, it is unclear how well they understand the physical dynamics of real-world environments. Here we introduce Physion, a dataset and benchmark for rigorously evaluating the ability to predict how physical scenarios evolve over time. Our dataset features realistic simulations of a wide range of physical phenomena, including rigid- and soft-body collisions, stable multi-object configurations, rolling, sliding, and projectile motion, and thus poses a more comprehensive challenge than previous benchmarks. We use Physion to benchmark a suite of models that vary in their architectures, learning objectives, input-output structures, and training data. In parallel, we obtain precise measurements of human prediction behavior on the same scenarios, allowing us to directly evaluate how well any model can approximate human behavior. We find that vision algorithms that learn object-centric representations generally outperform those that do not, yet still fall short of human performance. On the other hand, graph neural networks with direct access to physical state information perform substantially better and make predictions more similar to those people make. These results suggest that extracting physical representations of scenes is the main bottleneck to achieving human-level and human-like physical understanding in vision algorithms. We have publicly released all data and code so that Physion can be used to benchmark additional models in a fully reproducible manner, enabling systematic evaluation of progress toward vision algorithms that understand physical environments as robustly as people do.